🤖 AI Summary
This work investigates the sharp degradation in reasoning generalization that large language models exhibit when the required number of inference steps exceeds their training distribution. Through systematic analysis across multiple domains, the authors find that reasoning failures concentrate at a few critical token positions and are driven by specific "erroneous processing heads" (EP heads), attention heads that amplify incorrect reasoning trajectories while suppressing correct ones. To address this, they propose a lightweight test-time intervention that dynamically identifies and deactivates EP heads during inference, correcting reasoning behavior on the fly. Extensive experiments show that this approach substantially improves out-of-distribution reasoning performance across diverse tasks and state-of-the-art LLMs, confirming its effectiveness and broad applicability.
📝 Abstract
Chain-of-thought (CoT) reasoning has become the standard paradigm for enabling Large Language Models (LLMs) to solve complex problems. However, recent studies reveal a sharp performance drop in reasoning-hop generalization scenarios, where the required number of reasoning steps exceeds the training distribution while the underlying algorithm remains unchanged. The internal mechanisms driving this failure remain poorly understood. In this work, we conduct a systematic study on tasks from multiple domains and find that errors concentrate at token positions belonging to a few critical error types, rather than being uniformly distributed. Closer inspection reveals that these token-level erroneous predictions stem from an internal competition mechanism: certain attention heads, termed erroneous processing heads (EP heads), tip the balance by amplifying incorrect reasoning trajectories while suppressing correct ones. Notably, removing individual EP heads during inference often restores the correct prediction. Motivated by these insights, we propose test-time correction of reasoning, a lightweight intervention method that dynamically identifies and deactivates EP heads during the reasoning process. Extensive experiments across different tasks and LLMs show that it consistently improves reasoning-hop generalization, highlighting both its effectiveness and potential.
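The core mechanistic operation described above, deactivating a specific attention head at inference time, can be illustrated with a toy example. The sketch below is not the paper's implementation: it is a minimal NumPy multi-head self-attention in which selected heads are zeroed out before the output projection, which is one common way head ablation is realized; all weight names and shapes are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, ablate_heads=()):
    """Toy multi-head self-attention over a (seq, d_model) input.

    Heads listed in `ablate_heads` have their output zeroed before the
    final projection, mimicking test-time deactivation of a head.
    """
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    head_outs = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        # Scaled dot-product attention for this head.
        attn = softmax(q[:, s] @ k[:, s].T / np.sqrt(d_head))
        out = attn @ v[:, s]
        if h in ablate_heads:
            out = np.zeros_like(out)  # deactivate this head's contribution
        head_outs.append(out)
    return np.concatenate(head_outs, axis=-1) @ Wo

# Demo with random weights: ablating a head changes the layer's output.
rng = np.random.default_rng(0)
d_model, n_heads, seq = 8, 2, 4
x = rng.standard_normal((seq, d_model))
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) for _ in range(4))
full = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads)
ablated = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, ablate_heads=(0,))
print(np.abs(full - ablated).max() > 0)
```

In a real LLM the same effect is typically achieved with framework hooks (e.g., a PyTorch forward hook or a head mask) rather than by reimplementing attention; the point here is only that zeroing one head's output is a cheap, local intervention, which is what makes dynamic test-time deactivation practical.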