🤖 AI Summary
This work addresses the post-hoc rationalization problem in existing Reverse Chain-of-Thought (Reverse CoT) methods, where exposure to the ground-truth answer biases reasoning by serving as a cognitive anchor. The study presents the first systematic quantification of this anchoring effect through a tripartite evaluation framework encompassing lexical, entropy-based, and probabilistic metrics. Drawing on Ironic Process Theory from cognitive psychology, the authors propose a novel Structural Skeleton-guided Reasoning (SSR) paradigm that first generates an answer-agnostic functional skeleton and then constructs a complete reasoning chain on this scaffold. They further introduce a distillation-based fine-tuning strategy, SSR-D. Experimental results demonstrate that SSR-D achieves up to a 10% performance gain over semantic suppression baselines across multiple open-ended reasoning benchmarks and exhibits strong out-of-distribution generalization.
📝 Abstract
Reverse Chain-of-Thought Generation (RCG) synthesizes reasoning traces from query-answer pairs, but runs the risk of producing post-hoc rationalizations: when models can see the answer during generation, the answer serves as a cognitive anchor that shapes the entire explanation. We formalize this phenomenon through a three-level measurement hierarchy of lexical, entropic, and probabilistic anchoring, which capture surface artifacts, entropy dynamics, and latent answer dependence, respectively. We analyze semantic suppression, the intuitive mitigation strategy of instructing models to ignore the answer, and find it counterproductive: while it reduces lexical overlap, it paradoxically increases entropic and probabilistic anchoring. Drawing on Ironic Process Theory from cognitive psychology, we attribute this failure to active monitoring of the forbidden answer, which inadvertently deepens dependence on it. To break this cycle, we propose Structural Skeleton-guided Reasoning (SSR), a two-phase approach that first generates an answer-invariant functional skeleton, then uses this skeleton to guide full trace generation. By redirecting information flow toward structural planning rather than answer monitoring, SSR consistently reduces anchoring across all three levels. We further introduce Distilled SSR (SSR-D), which fine-tunes models on teacher-generated SSR traces to ensure reliable structural adherence. Experiments across open-ended reasoning benchmarks demonstrate that SSR-D achieves up to a 10% improvement over suppression baselines while preserving out-of-distribution (OOD) generalization.
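The two-phase SSR pipeline described above can be sketched as a short control flow. This is a minimal illustration, not the authors' implementation: the prompt wording, the `call_llm` function, and the canned responses are all hypothetical stand-ins; the point is only that the answer is withheld from Phase 1 and enters Phase 2 alongside the fixed skeleton.

```python
# Illustrative sketch of the two-phase SSR pipeline from the abstract.
# `call_llm` is a hypothetical stand-in for a real model API, stubbed with
# canned responses so the control flow is runnable end to end.

SKELETON_PROMPT = (
    "Given only the question below (the answer is withheld), outline the "
    "functional steps needed to solve it, without computing any values.\n"
    "Question: {question}"
)

TRACE_PROMPT = (
    "Question: {question}\n"
    "Reference answer: {answer}\n"
    "Follow this answer-agnostic skeleton step by step and produce a full "
    "reasoning chain:\n{skeleton}"
)

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would query a language model here.
    if "outline the" in prompt:
        return "1. Identify quantities. 2. Set up the relation. 3. Solve."
    return "Step 1: ... Step 2: ... Step 3: ... Final answer: 42"

def ssr_generate(question: str, answer: str) -> dict:
    # Phase 1: answer-invariant skeleton -- the answer never enters this prompt.
    skeleton = call_llm(SKELETON_PROMPT.format(question=question))
    # Phase 2: full trace guided by the skeleton; the answer is now visible,
    # but generation is anchored to the structural plan, not the answer.
    trace = call_llm(
        TRACE_PROMPT.format(question=question, answer=answer, skeleton=skeleton)
    )
    return {"skeleton": skeleton, "trace": trace}
```

In SSR-D, pairs produced this way by a teacher model would serve as fine-tuning data for a student, so that structural adherence no longer depends on prompting alone.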