Mapping Faithful Reasoning in Language Models

📅 2025-10-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Whether chain-of-thought (CoT) reasoning in large language models reflects genuine internal computation or merely superficial, post-hoc rationalization remains hard to determine. Method: This paper introduces Concept Walk, the first framework to model CoT reasoning explicitly in semantic concept space. It uses contrastive learning to extract interpretable concept directions, then projects hidden-layer activations onto these directions to track how internal representations evolve throughout reasoning. Contribution/Results: Concept Walk enables fine-grained diagnosis of reasoning faithfulness: on simple tasks, the effect of perturbations decays rapidly, indicating decorative CoT; on difficult tasks, perturbations induce sustained, directional shifts in concept activation, consistent with substantive reasoning. Experiments on Qwen3-4B validate the approach, establishing a novel methodology for model interpretability grounded in dynamic concept-level analysis.
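
The two-step mechanism described above (learn a concept direction from contrastive data, then project per-step activations onto it) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes a simple difference-of-means direction as a stand-in for the contrastive learning step, and all names and data here are hypothetical.

```python
import numpy as np

def concept_direction(pos_acts, neg_acts):
    """Estimate a unit concept direction from contrastive activation sets.

    pos_acts / neg_acts: (n_examples, hidden_dim) arrays of hidden-layer
    activations for prompts that do / do not express the concept.
    Difference-of-means is one simple surrogate for the paper's
    contrastive learning step.
    """
    direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def concept_walk(step_activations, direction):
    """Project each reasoning step's activation onto the concept direction,
    yielding a 1-D trajectory of the model's internal stance."""
    return step_activations @ direction

# Toy example: a 4-dim hidden space with the concept along the first axis.
rng = np.random.default_rng(0)
pos = rng.normal(0, 0.1, (8, 4)) + np.array([1.0, 0.0, 0.0, 0.0])
neg = rng.normal(0, 0.1, (8, 4)) - np.array([1.0, 0.0, 0.0, 0.0])
d = concept_direction(pos, neg)

# Three synthetic "reasoning step" activations with growing concept content;
# their projections trace the model's evolving stance toward the concept.
steps = np.array([[0.2, 0, 0, 0], [0.6, 0, 0, 0], [1.0, 0, 0, 0]])
trajectory = concept_walk(steps, d)
```

In the real setting, `step_activations` would come from a chosen hidden layer of the model at each CoT step; the trajectory then shows whether the trace's content is actually moving the internal representation along the concept axis.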

📝 Abstract
Chain-of-thought (CoT) traces promise transparency for reasoning language models, but prior work shows they are not always faithful reflections of internal computation. This raises challenges for oversight: practitioners may misinterpret decorative reasoning as genuine. We introduce Concept Walk, a general framework for tracing how a model's internal stance evolves with respect to a concept direction during reasoning. Unlike surface text, Concept Walk operates in activation space, projecting each reasoning step onto the concept direction learned from contrastive data. This allows us to observe whether reasoning traces shape outcomes or are discarded. As a case study, we apply Concept Walk to the domain of Safety using Qwen3-4B. We find that in 'easy' cases, perturbed CoTs are quickly ignored, indicating decorative reasoning, whereas in 'hard' cases, perturbations induce sustained shifts in internal activations, consistent with faithful reasoning. The contribution is methodological: Concept Walk provides a lens to re-examine faithfulness through concept-specific internal dynamics, helping identify when reasoning traces can be trusted and when they risk misleading practitioners.
Problem

Research questions and friction points this paper is trying to address.

Evaluating faithfulness of reasoning traces in language models
Tracing internal concept evolution during model reasoning processes
Distinguishing genuine reasoning from decorative model outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tracing internal concept evolution in activation space
Projecting reasoning steps onto learned concept directions
Identifying decorative versus faithful reasoning through perturbations
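
The perturbation-based diagnosis above can be made concrete with a simple persistence score: compare how far the perturbed concept trajectory diverges from the baseline early versus late in the reasoning trace. This is an illustrative heuristic under assumed trajectories, not the paper's exact metric; the function name and data are hypothetical.

```python
import numpy as np

def perturbation_persistence(base_traj, pert_traj, window=3):
    """Ratio of late to early divergence between two concept trajectories.

    base_traj / pert_traj: per-step concept projections for the original
    and perturbed CoT. A ratio near 0 means the perturbation's effect
    died out (decorative CoT); a ratio near or above 1 means it was
    sustained (consistent with faithful reasoning).
    """
    gap = np.abs(np.asarray(pert_traj) - np.asarray(base_traj))
    early = gap[:window].mean()
    late = gap[-window:].mean()
    return late / (early + 1e-12)

base = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

# Decorative case: the injected perturbation is quickly ignored.
decaying = perturbation_persistence(base, [1.0, 0.5, 0.2, 0.05, 0.0, 0.0])

# Faithful case: the shift persists through to the end of the trace.
sustained = perturbation_persistence(base, [1.0, 1.1, 0.9, 1.0, 1.2, 1.1])
```

Thresholding such a score per task gives a simple way to separate the "easy" (decorative) from "hard" (faithful) regimes the summary describes.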
👥 Authors
Jiazheng Li (King’s College London)
Andreas Damianou (Spotify)
J Rosser (University of Oxford)
José Luis Redondo García (Spotify)
Konstantina Palla (Spotify)