🤖 AI Summary
It remains challenging to determine whether chain-of-thought (CoT) reasoning in large language models reflects genuine internal reasoning or merely superficial, post-hoc rationalization.
Method: This paper introduces Concept Walk, the first framework to model CoT reasoning explicitly in semantic concept space. It employs contrastive learning to extract interpretable concept directions, then projects hidden-layer activations onto these directions to dynamically track the evolution of internal representations throughout reasoning.
Contribution/Results: Concept Walk enables fine-grained diagnosis of reasoning faithfulness: on simple tasks, perturbations decay rapidly, indicating decorative CoT; on difficult tasks, perturbations induce sustained, directional shifts in concept activation, consistent with substantive reasoning. Experiments on Qwen3-4B validate its effectiveness, establishing a novel methodology for model interpretability grounded in dynamic concept-level analysis.
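The two steps described above, learning a concept direction from contrastive data and projecting per-step activations onto it, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a simple difference-of-means direction (rather than a learned contrastive objective) and uses synthetic activations in place of real hidden states.

```python
import numpy as np

def concept_direction(pos, neg):
    """Unit concept direction from contrastive activation sets.

    pos, neg: (n, d) arrays of hidden activations collected on
    concept-positive / concept-negative prompts. A difference-of-means
    direction is a common stand-in for a contrastively learned one.
    """
    d = pos.mean(axis=0) - neg.mean(axis=0)
    return d / np.linalg.norm(d)

def concept_walk(step_acts, direction):
    """Project each reasoning step's activation onto the concept direction,
    yielding one scalar 'stance' value per step."""
    return step_acts @ direction

# Synthetic demo: two activation clusters separated along one axis of an
# 8-dim space, plus a 5-step reasoning trace to project.
rng = np.random.default_rng(0)
axis = np.zeros(8)
axis[0] = 1.0
pos = rng.normal(size=(50, 8)) + 2.0 * axis
neg = rng.normal(size=(50, 8)) - 2.0 * axis

w = concept_direction(pos, neg)
steps = rng.normal(size=(5, 8))     # placeholder per-step activations
trace = concept_walk(steps, w)      # shape (5,): stance trajectory
```

In the real setting, `steps` would be hidden-layer activations captured at each CoT step; a trajectory that drifts back to baseline after a perturbation would suggest decorative reasoning, while a sustained shift would suggest the trace is shaping the computation.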
📝 Abstract
Chain-of-thought (CoT) traces promise transparency for reasoning language models, but prior work shows they are not always faithful reflections of internal computation. This raises challenges for oversight: practitioners may misinterpret decorative reasoning as genuine. We introduce Concept Walk, a general framework for tracing how a model's internal stance evolves with respect to a concept direction during reasoning. Unlike surface text, Concept Walk operates in activation space, projecting each reasoning step onto the concept direction learned from contrastive data. This allows us to observe whether reasoning traces shape outcomes or are discarded. As a case study, we apply Concept Walk to the domain of Safety using Qwen3-4B. We find that in 'easy' cases, perturbed CoTs are quickly ignored, indicating decorative reasoning, whereas in 'hard' cases, perturbations induce sustained shifts in internal activations, consistent with faithful reasoning. The contribution is methodological: Concept Walk provides a lens to re-examine faithfulness through concept-specific internal dynamics, helping identify when reasoning traces can be trusted and when they risk misleading practitioners.