AI Summary
This work investigates the causal influence of explicit reasoning traces on final answer generation in large reasoning models (LRMs). Addressing the lack of clarity about how reasoning processes mechanistically shape answers, we introduce "Reasoning-Focus Heads" (RFHs): attention heads that track reasoning trajectories. We localize and validate RFHs at intermediate layers through integrated attention analysis, activation-patching interventions, and empirical evaluation. Experiments demonstrate that explicit reasoning substantially improves answer quality; moreover, targeted perturbations of key reasoning tokens attended to by RFHs consistently alter the final outputs, confirming a causal dependency of answers on reasoning paths. To our knowledge, this is the first systematic characterization of directed information flow from reasoning to answers within LRMs. Our findings lay a foundation for interpretable and controllable reasoning, with implications for model introspection, diagnostic intervention, and reasoning-aware architecture design.
Abstract
Large Reasoning Models (LRMs) generate explicit reasoning traces alongside final answers, yet the extent to which these traces influence answer generation remains unclear. In this work, we conduct a three-stage investigation into the interplay between reasoning and answer generation in three distilled DeepSeek R1 models. First, through empirical evaluation, we demonstrate that including explicit reasoning consistently improves answer quality across diverse domains. Second, attention analysis reveals that answer tokens attend substantially to reasoning tokens, with certain mid-layer Reasoning-Focus Heads (RFHs) closely tracking the reasoning trajectory, including self-reflective cues. Third, we apply mechanistic interventions using activation patching to assess the dependence of answer tokens on reasoning activations. Our results show that perturbations to key reasoning tokens can reliably alter the final answers, confirming a directional and functional flow of information from reasoning to answer. These findings deepen our understanding of how LRMs leverage reasoning tokens for answer generation, highlighting the functional role of intermediate reasoning in shaping model outputs. Our data and code are publicly available at https://aka.ms/R2A-code.
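The activation-patching intervention described above can be sketched in a toy numpy setting. Everything here is an illustrative stand-in, not the paper's actual setup: random weights play the role of one attention layer, one token index plays the role of a "reasoning token," and head 0 plays the role of a candidate RFH. The idea is the standard one: run a clean and a corrupted forward pass, splice a clean head activation into the corrupted run at the answer position, and measure how the readout moves.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, H = 6, 8, 2           # tokens, model dim, attention heads (toy sizes)
dh = d // H                 # per-head dimension

# Random toy weights standing in for one attention layer (illustrative only).
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))

def head_outputs(X):
    """Per-head attention outputs for input embeddings X, shape (H, T, dh)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outs = []
    for h in range(H):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)
        A = np.exp(scores - scores.max(axis=-1, keepdims=True))
        A /= A.sum(axis=-1, keepdims=True)      # softmax attention weights
        outs.append(A @ V[:, s])
    return np.stack(outs)

def readout(heads):
    """Project concatenated head outputs at the final ('answer') position."""
    return np.concatenate(heads[:, -1]) @ Wo

# Clean run: all "reasoning token" embeddings intact.
X_clean = rng.normal(size=(T, d))
clean = head_outputs(X_clean)

# Corrupted run: perturb one reasoning token's embedding.
X_corr = X_clean.copy()
X_corr[2] += rng.normal(scale=2.0, size=d)
corr = head_outputs(X_corr)

# Activation patching: splice the clean output of the candidate RFH
# (head 0) at the answer position into the corrupted run.
patched = corr.copy()
patched[0, -1] = clean[0, -1]

# If the head carries reasoning information, patching it moves the
# answer readout away from the corrupted baseline.
effect = np.linalg.norm(readout(patched) - readout(corr))
print(f"patching effect on answer readout: {effect:.3f}")
```

Repeating this per head and comparing effect sizes is how such an analysis localizes heads whose reasoning-position activations causally matter for the answer; in a real model the same splice would be done with forward hooks rather than a hand-rolled layer.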